4 research outputs found

    An investigation of feature models for music genre classification using the support vector classifier

    In music genre classification, the decision time is typically on the order of several seconds, yet most automatic music genre classification systems focus on short-time features derived from 10–50 ms frames. This work investigates two models for modelling short-time features: the multivariate Gaussian model and the multivariate autoregressive model. Furthermore, it investigates how these models can be integrated over a segment of short-time features into a kernel such that a support vector machine can be applied. Two kernels with this property were considered, the convolution kernel and the product probability kernel. To examine the different methods, an 11-genre music setup was used, in which Mel-frequency cepstral coefficients (MFCCs) served as short-time features. The accuracy of the best performing model on this data set was 44%, compared to a human performance of 52% on the same data set.
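    The abstract describes the pipeline but not an implementation; a minimal sketch of one of the investigated combinations, the multivariate Gaussian segment model with a product probability kernel (here its closed-form expected-likelihood variant), is given below. The librosa and scikit-learn calls, MFCC dimension, regularization, and the placeholder file paths and labels are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch: fit a multivariate Gaussian to each clip's short-time
# MFCCs, compare clips with the expected-likelihood (product probability)
# kernel, and classify with an SVM on the precomputed Gram matrix.
import numpy as np
import librosa
from scipy.stats import multivariate_normal
from sklearn.svm import SVC

def segment_gaussian(path, n_mfcc=13):
    """Model one audio segment as a Gaussian over its short-time MFCC frames."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)
    mu = mfcc.mean(axis=0)
    cov = np.cov(mfcc, rowvar=False) + 1e-6 * np.eye(n_mfcc)   # mild regularization
    return mu, cov

def product_probability_kernel(g1, g2):
    """Expected-likelihood kernel between two Gaussians:
    integral of N(x; mu1, S1) * N(x; mu2, S2) dx = N(mu1; mu2, S1 + S2)."""
    (mu1, S1), (mu2, S2) = g1, g2
    return multivariate_normal.pdf(mu1, mean=mu2, cov=S1 + S2)

def gram_matrix(models_a, models_b):
    return np.array([[product_probability_kernel(a, b) for b in models_b]
                     for a in models_a])

# Placeholder clips and labels; replace with a real (e.g. 11-genre) data set.
train_paths, train_labels = ["clip_blues.wav", "clip_jazz.wav"], [0, 1]
test_paths = ["clip_unknown.wav"]

train_models = [segment_gaussian(p) for p in train_paths]
test_models = [segment_gaussian(p) for p in test_paths]

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(gram_matrix(train_models, train_models), train_labels)
predictions = clf.predict(gram_matrix(test_models, train_models))
```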

    Estimating the Support of a High-Dimensional Distribution

    No full text
    Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a “simple” subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.
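    This one-class formulation is the basis of what scikit-learn exposes as OneClassSVM; the short synthetic-data sketch below shows how the a priori specified fraction enters as the nu parameter and how the sign of the learned function f separates the estimated region S from its complement. The RBF kernel, gamma value, and synthetic data are illustrative choices, not prescribed by the abstract.

```python
# Sketch: estimate the support of a distribution with a one-class SVM.
# nu is the a priori bound on the fraction of training points allowed
# to fall outside the estimated region S; the data here is synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                       # samples drawn from P
X_test = np.vstack([rng.normal(size=(20, 2)),             # likely inside S
                    rng.uniform(5.0, 8.0, size=(5, 2))])  # far from P: likely outside

ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)

scores = ocsvm.decision_function(X_test)  # learned f: positive inside S, negative outside
labels = ocsvm.predict(X_test)            # +1 = inside estimated support, -1 = outside
print(labels)
```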

    Kernel ellipsoidal trimming

    No full text
    Ellipsoid estimation is important in many practical areas such as control, system identification, visual/audio tracking, experimental design, data mining, robust statistics and statistical outlier or novelty detection. A new method, called kernel minimum volume covering ellipsoid (KMVCE) estimation, that finds an ellipsoid in a kernel-defined feature space is presented. Although the method is very general and can be applied to many of the aforementioned problems, the main focus is on the problem of statistical novelty/outlier detection. A simple iterative algorithm based on Mahalanobis-type distances in the kernel-defined feature space is proposed for practical implementation. The probability that a non-outlier is misidentified by our algorithms is analyzed using bounds based on Rademacher complexity. The KMVCE method performs very well on a set of real-life and simulated datasets when compared with standard kernel-based novelty detection methods.
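    The KMVCE iteration itself runs in a kernel-defined feature space and is not reproduced here; purely as an assumption-laden illustration of the underlying idea, the sketch below computes a Khachiyan-style minimum volume covering ellipsoid in input space, where Mahalanobis-type distances drive the iterative weight updates, and flags test points outside the ellipsoid as novelties. The tolerance, data, and decision rule are placeholders rather than the paper's algorithm.

```python
# Illustrative sketch only: Khachiyan-style iterative minimum volume covering
# ellipsoid in input space. The paper's KMVCE performs an analogous computation
# in a kernel-defined feature space; all parameters below are assumptions.
import numpy as np

def mvce(P, tol=1e-4, max_iter=1000):
    """Minimum volume covering ellipsoid of the rows of P: (x-c)^T A (x-c) <= 1."""
    n, d = P.shape
    Q = np.hstack([P, np.ones((n, 1))])      # lift points to d+1 dimensions
    u = np.full(n, 1.0 / n)                  # weights over the points
    for _ in range(max_iter):
        X = Q.T @ (u[:, None] * Q)           # (d+1, d+1) weighted scatter matrix
        # Mahalanobis-type distance of each lifted point under the current weights
        M = np.einsum("ij,ji->i", Q @ np.linalg.inv(X), Q.T)
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        converged = np.linalg.norm(new_u - u) < tol
        u = new_u
        if converged:
            break
    c = P.T @ u                              # ellipsoid center
    A = np.linalg.inv(P.T @ (u[:, None] * P) - np.outer(c, c)) / d
    return c, A

def mahalanobis_sq(X, c, A):
    diff = X - c
    return np.einsum("ij,jk,ik->i", diff, A, diff)

# Flag test points lying outside the covering ellipsoid as novelties.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 2))
c, A = mvce(X_train)
X_test = np.array([[0.2, -0.1], [6.0, 6.0]])
is_novelty = mahalanobis_sq(X_test, c, A) > 1.0   # the second point should be flagged
print(is_novelty)
```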

    Bibliography

    No full text